
    Multiprocessor Global Scheduling on Frame-Based DVFS Systems

    Full text link
    In this ongoing work, we are interested in multiprocessor energy-efficient systems, where task durations are not known in advance but are known stochastically. More precisely, we consider global scheduling algorithms for frame-based multiprocessor stochastic DVFS (Dynamic Voltage and Frequency Scaling) systems. Moreover, we consider processors with a discrete set of available frequencies.

    Gang FTP scheduling of periodic and parallel rigid real-time tasks

    Full text link
    In this paper we consider the scheduling of periodic and parallel rigid tasks. We provide (and prove correct) an exact schedulability test for Fixed Task Priority (FTP) Gang scheduler sub-classes: Parallelism Monotonic, Idling, Limited Gang, and Limited Slack Reclaiming. Additionally, we study the predictability of our schedulers: we show that Gang FJP schedulers are not predictable, and we identify several sub-classes which are actually predictable. Moreover, we extend the definition of rigid, moldable and malleable jobs to recurrent tasks.

    Discrete Frequency Selection of Frame-Based Stochastic Real-Time Tasks

    Full text link
    Energy-efficient real-time task scheduling has been actively explored in the past decade. Unlike past work, this paper considers schedulability conditions for stochastic real-time tasks. A schedulability condition is first presented for frame-based stochastic real-time tasks, and several algorithms are examined to check the schedulability of a given strategy. An approach is then proposed, based on the schedulability condition, to adapt a continuous-speed-based method to a discrete-speed system. The approach stays as close as possible to the continuous-speed-based method while still guaranteeing schedulability. Simulations show that the energy savings can exceed 20% for some system configurations.
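A minimal sketch of the continuous-to-discrete adaptation described above (illustrative only, not the paper's algorithm; the single-frame model and function name are assumptions): compute the lowest continuous speed that meets the frame deadline, then round up to the nearest available discrete frequency.

```python
# Illustrative sketch: pick the lowest available discrete frequency
# that still finishes the remaining work of a frame before its deadline.
def pick_frequency(remaining_cycles, time_left, frequencies):
    """frequencies: available discrete speeds, in cycles per second."""
    needed = remaining_cycles / time_left          # minimal continuous speed
    feasible = [f for f in sorted(frequencies) if f >= needed]
    if not feasible:
        raise ValueError("frame is not schedulable at any available speed")
    return feasible[0]                             # lowest feasible speed saves energy

print(pick_frequency(8e8, 1.0, [5e8, 1e9, 2e9]))   # -> 1000000000.0
```

Rounding up to the next discrete frequency is what preserves schedulability; the gap between the continuous optimum and the chosen discrete speed is where the energy overhead comes from.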

    Can we use perfect simulation for non-monotonic Markovian systems?

    No full text
    Simulation approaches are alternative methods to estimate the stationary behavior of stochastic systems by providing samples distributed according to the stationary distribution, even when it is impossible to compute this distribution numerically. Propp and Wilson used a backward coupling to derive a simulation algorithm providing perfect sampling (i.e., whose distribution is exactly stationary) of the state of discrete-time finite Markov chains. Here, we adapt their algorithm by showing that, under mild assumptions, backward coupling can be used over two simulation trajectories only.
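The two-trajectory idea can be sketched on a toy monotone chain (an assumed example, not from the paper): for a monotone update rule, Propp-Wilson backward coupling only needs to simulate the minimal and maximal states until they coalesce.

```python
import random

# Sketch of Propp-Wilson "coupling from the past" on a small monotone
# birth-death chain on {0, ..., N}.  Only the bottom (0) and top (N)
# trajectories are simulated; when they meet, the common state is an
# exact sample from the stationary distribution.
N = 10

def update(x, u):
    """Monotone update rule driven by a shared uniform random number u."""
    if u < 0.4:
        return max(x - 1, 0)
    if u < 0.8:
        return min(x + 1, N)
    return x

def perfect_sample(rng=None):
    rng = rng or random.Random()
    randoms = []
    while True:
        # Double the horizon backwards, reusing the old random numbers
        # for the more recent steps (essential for CFTP correctness).
        randoms = [rng.random() for _ in range(len(randoms) or 1)] + randoms
        lo, hi = 0, N
        for u in randoms:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:   # trajectories coalesced: lo is an exact sample
            return lo
```

With a fixed seed the sampler is deterministic; reusing earlier randomness when extending the horizon is what makes the sample exactly (not approximately) stationary.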

    U-EDF: An Unfair But Optimal Multiprocessor Scheduling Algorithm for Sporadic Tasks

    Full text link
    A multiprocessor scheduling algorithm named U-EDF was presented in [1] for the scheduling of periodic tasks with implicit deadlines. It was claimed that U-EDF is optimal for periodic tasks (i.e., it can meet all deadlines of every schedulable task set), and extensive simulations showed a drastic improvement in the number of task preemptions and migrations in comparison to state-of-the-art optimal algorithms. However, there was no proof of its optimality, and U-EDF was not designed to schedule sporadic tasks. In this work, we propose a generalization of U-EDF for the scheduling of sporadic tasks with implicit deadlines, and we prove its optimality. Contrary to all other existing optimal multiprocessor scheduling algorithms for sporadic tasks, U-EDF is not based on the fairness property. Instead, it extends the main principles of EDF so that it achieves optimality while benefiting from a substantial reduction in the number of preemptions and migrations. © 2012 IEEE.
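For background, the EDF principle that U-EDF extends can be stated in a few lines (a toy uniprocessor job picker, not U-EDF itself; names are illustrative):

```python
# Plain uniprocessor EDF: always run the ready job whose absolute
# deadline is earliest.  U-EDF generalizes this principle to
# multiprocessors without relying on fairness.
def edf_pick(ready_jobs):
    """ready_jobs: list of (name, absolute_deadline) tuples."""
    return min(ready_jobs, key=lambda job: job[1])[0]

print(edf_pick([("A", 12), ("B", 7), ("C", 9)]))   # -> B
```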

    Techniques Optimizing the Number of Processors to Schedule Multi-threaded Tasks

    Full text link
    In recent years, we have witnessed a dramatic increase in the number of cores available in computational platforms. Concurrently, a new coding paradigm dividing tasks into smaller execution instances called threads was developed to take advantage of the inherent parallelism of multiprocessor platforms. However, only a few methods have been proposed to efficiently schedule hard real-time multi-threaded tasks on multiprocessors. In this paper, we propose techniques optimizing the number of processors needed to schedule such sporadic parallel tasks with constrained deadlines. We first define an optimization problem determining, for each thread, an intermediate (artificial) deadline minimizing the number of processors needed to schedule the whole task set. The scheduling algorithm can then schedule threads as if they were independent sequential sporadic tasks. The second contribution is an efficient and nevertheless optimal algorithm that can be executed online to determine the threads' deadlines. Hence, it can be used in dynamic systems where all tasks and their characteristics are not known a priori. We finally prove that our techniques achieve a resource augmentation bound of 2 when the threads are scheduled with algorithms such as U-EDF, PD2, LLREF, DP-Wrap, etc. © 2012 IEEE.
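The idea of intermediate (artificial) deadlines can be illustrated with a simple proportional split (a hypothetical sketch; the paper's optimal algorithm solves an optimization problem, not this heuristic):

```python
# Hypothetical illustration: split a task's constrained deadline D
# into intermediate deadlines for its sequential segments,
# proportionally to each segment's worst-case execution time (WCET).
# Each segment can then be scheduled as an independent sporadic task.
def intermediate_deadlines(wcets, deadline):
    total = sum(wcets)
    offsets, t = [], 0.0
    for c in wcets:
        t += deadline * c / total
        offsets.append(t)
    return offsets

print(intermediate_deadlines([2, 1, 1], 8.0))   # -> [4.0, 6.0, 8.0]
```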

    A Stochastic Approach to Meta-Scheduling Heuristics in Computing Grids

    No full text
    Computational Grids are large infrastructures composed of several components such as clusters or massively parallel machines, generally spread across a country or the world, linked together through a network such as the Internet, and allowing transparent access to any resource. Grids have become unavoidable for a large part of the scientific community requiring computational power, such as high-energy physics, bioinformatics or earth observation. Large projects are emerging, often at an international level, but even if Grids are on the way to becoming efficient and user-friendly systems, computer scientists and engineers still have a huge amount of work to do to improve their efficiency. Among the large number of problems to solve or improve upon, the problem of scheduling the work and balancing the load is of first importance.

    This work concentrates on the way work is dispatched on such systems, and mainly on how the first level of scheduling, generally named brokering or meta-scheduling, is performed. We deeply analyze the behavior of popular strategies, compare their efficiency, and propose a new, very efficient brokering policy providing notable performance, attested by the large number of simulations we performed and provide in the document.

    The work is mainly split in two parts. After introducing the mathematical framework on which the remainder of the manuscript is based, we study systems where the grid brokering is done without any feedback information, i.e. without knowing the current state of the clusters when the resource broker (the grid component receiving jobs from clients and performing the brokering) makes its decision. We show here how a computational grid behaves if the brokering is done in such a way that each cluster receives a quantity of work proportional to its computational capacity.

    The second part of this work is rather independent of the first one, and consists in the presentation of a brokering strategy, based on Whittle's indices, trying to minimize as much as possible the average sojourn time of jobs. We show how efficient the proposed strategy is for computational grids, compared to the ones popular in production systems. We also show its robustness to several parameter changes, and provide several very efficient algorithms for the computations required by this index policy. We finally extend our model in several directions. (PhD thesis in Sciences, Computer Science specialization.)

    Email Address Reliability: Maintaining the Quality of E-mail Addresses

    No full text
    With the dematerialization of information and the growing establishment of synergies between administrations, employers, companies and citizens, the quality of e-mail addresses is becoming strategic. Indeed, good management of these addresses can, within the framework of e-government, contribute to improving the services provided and reducing costs. This is the case when e-mail addresses are used to send notifications, even after authentication, in the context of registered electronic mail, for example. If e-mail addresses are incorrect, the notifications of registered mail must be sent by post, after possible processing of the erroneous cases. This can increase the cumulative costs over 5 years by several million euros, depending on the size of the database. To these elements are added the associated indirect gains (also mentioned in the private and marketing sectors): compliance with legislation, service to the citizen, and credibility in communication campaigns. In many countries (Denmark, Sweden, Norway, Canada, …), the use of authenticated e-mail addresses in exchanges between administrations and citizens is becoming widespread within e-government. In 2012, the 15-year ROI of such an approach in Norway was estimated at about 250 million euros. Widely used today and for some time to come, e-mail addresses are nevertheless characterized by an accumulation of uncertainties, whether it be the volatility of usage, the dynamics of domain names, or the presence of non-standard syntaxes.
    To ensure control over them, the talk will present: the syntactic elements, validation (existence tests) and data-matching techniques applicable in batch and online, depending on the case; and a set of best practices and an organization to maintain the quality of a large source of e-mail addresses over time.
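The syntactic pre-filtering step can be illustrated with a minimal check (a rough regex approximation, far from a full RFC 5322 validator; the pattern and function name are assumptions for illustration):

```python
import re

# Minimal syntactic pre-filter for e-mail addresses, of the kind used
# for batch cleaning before existence tests and data matching.
# Deliberately simplified: it rejects some valid RFC 5322 addresses
# (quoted local parts, internationalized domains, ...).
ADDRESS = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_valid(address):
    return ADDRESS.fullmatch(address) is not None

print(looks_valid("jane.doe@example.org"))   # -> True
print(looks_valid("not-an-address"))         # -> False
```

Syntax checks of this kind only catch malformed entries; the existence tests and matching steps mentioned above are still needed to detect dead or mistyped mailboxes.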

    Modeling resubmission in unreliable grids: The bottom-up approach

    No full text
    Failure is an ordinary characteristic of large-scale distributed environments. Resubmission is a general strategy employed to cope with failures in grids. Here, we analytically and experimentally study resubmission in the case of random brokering (jobs are dispatched to a computing element with a probability proportional to its computing power). We compare two cases: failed jobs are resubmitted either to the broker or to the computing element. Results show that resubmitting to the broker is the better strategy. Our approach is different from most existing trace-based ones as it is a bottom-up one: we start from a simple model of a grid and derive its characteristics. © 2010 Springer-Verlag.
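The comparison can be sketched with a toy Monte-Carlo experiment (an assumed model, not the paper's analytical one: two clusters of equal power, one of which is unreliable):

```python
import random

# Toy simulation: jobs are dispatched to cluster i with probability
# proportional to its power; each execution on cluster i fails
# independently with probability fail[i].  We count the attempts a job
# needs when failed jobs go back to the broker (cluster re-drawn)
# versus staying on the same computing element.
def attempts(resubmit_to_broker, powers, fail, rng):
    cluster = rng.choices(range(len(powers)), weights=powers)[0]
    n = 1
    while rng.random() < fail[cluster]:      # this execution failed
        if resubmit_to_broker:
            cluster = rng.choices(range(len(powers)), weights=powers)[0]
        n += 1
    return n

def average_attempts(resubmit_to_broker, trials=2000, seed=0):
    rng = random.Random(seed)
    powers, fail = [1.0, 1.0], [0.9, 0.0]    # one unreliable cluster
    return sum(attempts(resubmit_to_broker, powers, fail, rng)
               for _ in range(trials)) / trials

print(average_attempts(True))    # resubmit to broker: few attempts
print(average_attempts(False))   # resubmit to element: many more
```

Resubmitting to the broker lets a job escape an unreliable element, which is the intuition behind the paper's result.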